Many practical applications, such as recommender systems and learning to rank, involve solving multiple similar tasks. One example is learning recommendation policies for users with similar movie preferences, where the users may still rank individual movies slightly differently. Such tasks can be organized in a hierarchy, where similar tasks are related through a shared structure. In this work, we formulate this problem as contextual off-policy optimization in a hierarchical graphical model from logged bandit feedback. To solve the problem, we propose a hierarchical off-policy optimization algorithm (HierOPO), which estimates the parameters of the hierarchical model and then acts pessimistically with respect to them. We instantiate HierOPO in linear Gaussian models, for which we also provide an efficient implementation and analysis. We prove per-task bounds on the suboptimality of the learned policies, which show a clear improvement over not using the hierarchical model. We also evaluate the policies empirically. Our theoretical and empirical results show a clear advantage of using the hierarchy over solving each task independently.
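To make the pessimistic step concrete, here is a minimal sketch in a linear-Gaussian task model. It assumes the hierarchical (hyper-posterior) mean and covariance have already been estimated from all tasks; the function names, the noise scale, and the confidence width `alpha` are illustrative choices, not the paper's implementation.

```python
import numpy as np

def task_posterior(mu_hyper, Sigma_hyper, X, y, noise_sd=1.0):
    """Posterior of one task's parameter, using the estimated hyper-posterior
    N(mu_hyper, Sigma_hyper) as the task prior and the task's logged data (X, y)."""
    prior_prec = np.linalg.inv(Sigma_hyper)
    post_prec = prior_prec + X.T @ X / noise_sd**2
    Sigma_post = np.linalg.inv(post_prec)
    mu_post = Sigma_post @ (prior_prec @ mu_hyper + X.T @ y / noise_sd**2)
    return mu_post, Sigma_post

def pessimistic_action(actions, mu_post, Sigma_post, alpha=1.0):
    """Pick the action (feature vector) maximizing a lower confidence bound."""
    lcb = [a @ mu_post - alpha * np.sqrt(a @ Sigma_post @ a) for a in actions]
    return int(np.argmax(lcb))
```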
Privacy noise may negate the benefits of using adaptive optimizers in differentially private model training. Prior works typically address this issue by using auxiliary information (e.g., public data) to boost the effectiveness of adaptive optimization. In this work, we explore techniques to estimate and efficiently adapt to gradient geometry in private adaptive optimization without auxiliary data. Motivated by the observation that adaptive methods can tolerate stale preconditioners, we propose differentially private adaptive training with delayed preconditioners (DP^2), a simple method that constructs delayed but less noisy preconditioners to better realize the benefits of adaptivity. Theoretically, we provide convergence guarantees for our method for both convex and non-convex problems, and analyze trade-offs between delay and privacy noise reduction. Empirically, we explore DP^2 across several real-world datasets, demonstrating that it can improve convergence speed by as much as 4x relative to non-adaptive baselines and match the performance of state-of-the-art optimization methods that require auxiliary data.
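As a rough illustration of the delayed-preconditioner idea (a toy approximation, not the exact DP^2 construction), the loop below privatizes every gradient with clipping and Gaussian noise but rescales updates only with second-moment statistics that are at least `delay` steps old; `grad_fn`, the delay, and the averaging rule are assumptions.

```python
import numpy as np

def dp_adaptive_train(grad_fn, w0, steps, delay=100, lr=0.1,
                      clip=1.0, noise_mult=1.0, eps=1e-8, seed=0):
    """Toy loop: privatize each gradient (clip + Gaussian noise) and rescale it with a
    preconditioner built only from gradients at least `delay` steps old."""
    rng = np.random.default_rng(seed)
    w = np.array(w0, dtype=float)
    sq_hist = []                      # history of squared privatized gradients
    precond = np.ones_like(w)         # identity preconditioner until enough history exists
    for t in range(steps):
        g = grad_fn(w, t)
        g = g * min(1.0, clip / (np.linalg.norm(g) + 1e-12))            # clip
        g_priv = g + rng.normal(0.0, noise_mult * clip, size=g.shape)   # add DP noise
        sq_hist.append(g_priv ** 2)
        if t >= delay:                # refresh preconditioner with stale statistics only
            precond = np.sqrt(np.mean(sq_hist[: t - delay + 1], axis=0)) + eps
        w -= lr * g_priv / precond
    return w
```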
Large language models (LLMs) have led to a series of breakthroughs in natural language processing (NLP), owing to their excellent understanding and generation abilities. Remarkably, what further sets these models apart is the massive amounts of world knowledge they internalize during pretraining. While many downstream applications provide the model with an informational context to aid its performance on the underlying task, how the model's world knowledge interacts with the factual information presented in the context remains underexplored. As a desirable behavior, an LLM should give precedence to the context whenever it contains task-relevant information that conflicts with the model's memorized knowledge. This enables model predictions to be grounded in the context, which can then be used to update or correct specific model predictions without frequent retraining. By contrast, when the context is irrelevant to the task, the model should ignore it and fall back on its internal knowledge. In this paper, we undertake a first joint study of the aforementioned two properties, namely controllability and robustness, in the context of LLMs. We demonstrate that state-of-the-art T5 and PaLM models (both pretrained and finetuned) can exhibit poor controllability and robustness, and that neither property improves with increasing model size. As a solution, we propose a novel method, Knowledge Aware FineTuning (KAFT), to strengthen both controllability and robustness by incorporating counterfactual and irrelevant contexts into standard supervised datasets. Our comprehensive evaluation showcases the utility of KAFT across model architectures and sizes.
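A minimal sketch of the kind of data augmentation KAFT describes, assuming QA examples stored as dicts with `question`, `context`, and `answer` fields; the exact construction of counterfactual and irrelevant contexts in the paper may differ.

```python
import random

def build_kaft_examples(qa_examples, all_contexts):
    """Sketch: augment a supervised QA set with (i) counterfactual contexts whose
    answer entity is swapped (target = swapped answer, so the model follows the
    context) and (ii) irrelevant contexts (target = original answer, so the model
    falls back on its internal knowledge)."""
    answers = [ex["answer"] for ex in qa_examples]
    augmented = []
    for ex in qa_examples:
        # 1. original, relevant context -> original answer
        augmented.append({"question": ex["question"], "context": ex["context"],
                          "target": ex["answer"]})
        # 2. counterfactual context: edit the answer span and supervise on the edit
        fake = random.choice([a for a in answers if a != ex["answer"]])
        cf_context = ex["context"].replace(ex["answer"], fake)
        augmented.append({"question": ex["question"], "context": cf_context,
                          "target": fake})
        # 3. irrelevant context: the model should ignore it and answer from memory
        augmented.append({"question": ex["question"],
                          "context": random.choice(all_contexts),
                          "target": ex["answer"]})
    return augmented
```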
Large pretrained models (e.g., GPT-3) achieve remarkable performance by being exposed to enormous amounts of data during training. Similarly, distilling such large models into compact models for efficient deployment also requires a large amount of (labeled or unlabeled) training data. In this paper, we propose the teacher-guided training (TGT) framework for training high-quality compact models, which leverages the knowledge acquired by a pretrained generative model while avoiding the need for large amounts of data. TGT exploits the fact that the teacher has acquired a good representation of the underlying data domain, which typically corresponds to a manifold of much lower dimension than the input space. Furthermore, we can use the teacher to explore the input space more efficiently through sampling or gradient-based methods. This makes TGT especially attractive in limited-data or long-tail settings. We formally capture the benefit of the proposed data-domain exploration in our generalization bounds. We find that TGT improves accuracy on several image classification benchmarks as well as a range of text classification and retrieval tasks.
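For illustration, a single teacher-guided distillation step might look like the sketch below, assuming a `generator` that samples inputs near the data manifold (e.g., via the teacher) and a frozen `teacher` network providing soft labels; the interfaces and temperature are assumptions, not the paper's exact procedure.

```python
import torch
import torch.nn.functional as F

def distillation_step(student, teacher, real_x, generator, n_synth=32, temp=2.0):
    """Sketch: distill the teacher into the student on both real inputs and inputs
    sampled near the data manifold by the generator."""
    synth_x = generator(n_synth)                  # explore the input space via the teacher/generator
    x = torch.cat([real_x, synth_x], dim=0)
    with torch.no_grad():
        t_logits = teacher(x)                     # teacher soft labels
    s_logits = student(x)
    # standard soft-label distillation loss on the combined batch
    loss = F.kl_div(F.log_softmax(s_logits / temp, dim=-1),
                    F.softmax(t_logits / temp, dim=-1),
                    reduction="batchmean") * temp ** 2
    return loss
```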
We introduce ART, a new corpus-level autoencoding approach for training dense retrieval models that does not require any labeled training data. Dense retrieval is a core challenge for open-domain tasks such as open QA, where state-of-the-art methods typically require large supervised datasets with custom hard-negative mining and denoised positive examples. In contrast, ART only requires access to unpaired inputs and outputs (e.g., questions and potential answer documents). It uses a new document-retrieval autoencoding scheme, in which (1) an input question is used to retrieve a set of evidence documents, and (2) the documents are then used to compute the probability of reconstructing the original question. Retrieval training based on question reconstruction enables effective unsupervised learning of both document and question encoders, which can later be incorporated into complete QA systems without any further finetuning. Extensive experiments show that ART achieves state-of-the-art results on multiple QA retrieval benchmarks with only a generic initialization from a pretrained language model, removing the need for labeled data and task-specific losses.
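A compact sketch of the question-reconstruction objective, under assumed interfaces: `q_enc` and `d_enc` are the retriever's question and document encoders, and `recon_lm(question, doc)` returns the log-probability of reconstructing the question from one document under a frozen pretrained LM. The distillation direction and top-k size are simplifications.

```python
import torch
import torch.nn.functional as F

def art_loss(q_enc, d_enc, question, docs, recon_lm, k=8):
    """Sketch: retriever scores over the top-k documents are trained to match how
    well each document lets a frozen LM reconstruct the question."""
    q = q_enc(question)                           # (dim,)
    D = d_enc(docs)                               # (num_docs, dim)
    scores = D @ q                                # dense retrieval scores
    topk = torch.topk(scores, k).indices
    with torch.no_grad():
        # log p(question | doc) under the frozen LM, one scalar tensor per document
        recon = torch.stack([recon_lm(question, docs[i]) for i in topk.tolist()])
    target = F.softmax(recon, dim=0)              # soft labels from reconstruction
    retr = F.log_softmax(scores[topk], dim=0)     # retriever distribution over top-k
    return F.kl_div(retr, target, reduction="sum")
```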
Question answering (QA) over knowledge bases (KBs) is challenging because of the diverse, essentially unbounded, types of reasoning patterns needed. However, we hypothesize that in a large KB, the reasoning patterns required to answer a query type recur for various entities in their respective subgraph neighborhoods. Exploiting this structural similarity between the local neighborhoods of different subgraphs, we introduce a semiparametric model (CBR-SubG) with (i) a nonparametric component that, for each query, dynamically retrieves other similar k-nearest-neighbor (KNN) training queries along with query-specific subgraphs, and (ii) a trained parametric component that identifies the (latent) reasoning patterns from the subgraphs of the KNN queries and then applies them to the subgraph of the target query. We also propose an adaptive subgraph collection strategy that selects a compact, query-specific subgraph, allowing us to scale to the full Freebase KB containing billions of facts. We show that CBR-SubG can answer queries requiring subgraph reasoning patterns and performs competitively with the best models on several KBQA benchmarks. Our subgraph collection strategy also produces more compact subgraphs (e.g., a 55% reduction in size on WebQSP while increasing answer recall by 4.85%). Code, models, and subgraphs are available at https://github.com/rajarshd/cbr-subg.
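The nonparametric component can be illustrated by a simple nearest-neighbor lookup over query embeddings; the embedding model, the similarity measure, and how the retrieved cases' subgraphs are consumed downstream are all assumptions here.

```python
import numpy as np

def knn_queries(query_emb, train_embs, k=5):
    """Sketch: retrieve the k most similar training queries by cosine similarity.
    Their query-specific subgraphs would then guide reasoning on the target query."""
    q = query_emb / np.linalg.norm(query_emb)
    T = train_embs / np.linalg.norm(train_embs, axis=1, keepdims=True)
    sims = T @ q
    return np.argsort(-sims)[:k]      # indices of the KNN cases
```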
Adaptive optimization methods have become the default solvers for many machine learning tasks. Unfortunately, the benefits of adaptivity may degrade when training with differential privacy, as the noise added to ensure privacy reduces the effectiveness of the adaptive preconditioner. To this end, we propose AdaDPS, a general framework that uses non-sensitive side information to precondition the gradients, enabling the effective use of adaptive methods in private settings. We formally show that AdaDPS reduces the amount of noise needed to achieve similar privacy guarantees, thereby improving optimization performance. Empirically, we leverage simple and readily available side information to explore the performance of AdaDPS in practice, comparing to strong baselines in both centralized and federated settings. Our results show that AdaDPS improves accuracy by 7.7% (absolute) on average, yielding state-of-the-art privacy-utility trade-offs on large-scale text and image benchmarks.
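A one-step sketch of the idea, assuming the side information is a public estimate of per-coordinate second moments: preconditioning happens before clipping and noising, so the adaptive scaling itself consumes no extra privacy budget. Names and defaults are illustrative, not the paper's exact algorithm.

```python
import numpy as np

def adadps_style_step(w, grad, side_precond, lr=0.1, clip=1.0, noise_mult=1.0, eps=1e-8):
    """Sketch of one private step: precondition with non-sensitive side information,
    then clip and add Gaussian noise, then take an SGD step."""
    g = grad / (np.sqrt(side_precond) + eps)                        # precondition with side info
    g = g * min(1.0, clip / (np.linalg.norm(g) + 1e-12))            # clip
    g = g + np.random.normal(0.0, noise_mult * clip, size=g.shape)  # privatize
    return w - lr * g
```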
In contrast to SGD, adaptive gradient methods such as Adam allow robust training of modern deep networks, especially large language models. However, the use of adaptivity not only comes at the cost of extra memory but also raises a fundamental question: can non-adaptive methods such as SGD enjoy similar benefits? In this paper, we provide an affirmative answer by proposing to achieve robust and memory-efficient training via the following general recipe: (1) modify the architecture and make it scale invariant, i.e., the scale of the parameters does not affect the output of the network, (2) train with SGD and weight decay, and (3) clip the global gradient norm proportionally to the weight norm, multiplied by $\sqrt{\tfrac{2\lambda}{\eta}}$, where $\eta$ is the learning rate and $\lambda$ is the weight decay. We show that this general approach is robust to rescaling of parameters and loss by proving that its convergence depends only logarithmically on the scale of the initialization and loss, whereas standard SGD may not even converge for many initializations. Following our recipe, we design a scale-invariant version of BERT, called SIBERT, which, when trained simply with vanilla SGD, achieves performance on downstream tasks comparable to BERT trained with adaptive methods.
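Step (3) of the recipe can be sketched as below, clipping the global gradient norm at $\lVert w \rVert \sqrt{2\lambda/\eta}$ before an SGD update; the decoupled form of weight decay and the small constant in the denominator are assumptions on our part.

```python
import math
import torch

def clipped_sgd_step(params, lr, weight_decay):
    """Sketch: clip the global gradient norm at ||w|| * sqrt(2*lambda/eta),
    then apply an SGD step with (decoupled) weight decay."""
    grads = [p.grad for p in params]
    gnorm = torch.sqrt(sum((g ** 2).sum() for g in grads))
    wnorm = torch.sqrt(sum((p ** 2).sum() for p in params))
    threshold = wnorm * math.sqrt(2 * weight_decay / lr)
    scale = min(1.0, (threshold / (gnorm + 1e-12)).item())
    with torch.no_grad():
        for p in params:
            p.mul_(1 - lr * weight_decay)       # weight decay
            p.add_(p.grad, alpha=-lr * scale)   # clipped gradient step
    return gnorm.item()
```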
One of the central problems in auction design is to develop an incentive-compatible mechanism that maximizes the auctioneer's expected revenue. While theoretical approaches have encountered bottlenecks in multi-item auctions, much progress has recently been made on finding optimal mechanisms through deep learning. However, these works either focus on a fixed set of bidders and items or restrict the auction to be symmetric. In this work, we overcome such limitations by factoring contextual information about bidders and items into the auction learning framework. We propose $\mathtt{CITransNet}$, a context-integrated transformer-based neural network for optimal auction design, which maintains permutation-equivariance over bids and contexts while being able to find asymmetric solutions. We show by extensive experiments that $\mathtt{CITransNet}$ can recover the known optimal solutions in single-item settings, outperform strong baselines in multi-item auctions, and generalize well to cases beyond those seen in training.
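As a rough sketch (not the paper's architecture), one way to integrate contexts is to represent each (bidder, item) cell by its bid concatenated with the bidder and item context vectors and run a transformer encoder over all cells without positional encodings, which keeps the representation permutation-equivariant; allocation and payment heads would sit on top.

```python
import torch
import torch.nn as nn

class ContextBidEncoder(nn.Module):
    """Sketch: encode a bid matrix together with bidder/item contexts,
    permutation-equivariantly over the (bidder, item) cells."""
    def __init__(self, ctx_dim, d_model=64, nhead=4, nlayers=2):
        super().__init__()
        self.proj = nn.Linear(1 + 2 * ctx_dim, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, nlayers)

    def forward(self, bids, bidder_ctx, item_ctx):
        # bids: (B, n, m); bidder_ctx: (B, n, c); item_ctx: (B, m, c)
        B, n, m = bids.shape
        bc = bidder_ctx.unsqueeze(2).expand(-1, -1, m, -1)
        ic = item_ctx.unsqueeze(1).expand(-1, n, -1, -1)
        cells = torch.cat([bids.unsqueeze(-1), bc, ic], dim=-1)  # (B, n, m, 1+2c)
        h = self.proj(cells).reshape(B, n * m, -1)
        return self.encoder(h).reshape(B, n, m, -1)  # features for allocation/payment heads
```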
Meta-, multi-task, and federated learning can all be viewed as solving similar tasks, drawn from an unknown distribution that reflects task similarities. In this work, we provide a unified view of all these problems as acting in a hierarchical Bayesian bandit. We analyze a natural hierarchical Thompson sampling algorithm (HierTS) that can be applied to any problem in this class. Our regret bounds hold in many instances of such problems, including when the tasks are solved sequentially or in parallel, and they capture the structure of the problems, such that the regret decreases with the width of the task prior. Our proofs rely on a novel total-variance decomposition that can be applied to other graphical model structures. Finally, our theory is complemented by experiments showing that the hierarchy helps with knowledge sharing among the tasks. This confirms that hierarchical Bayesian bandits are a universal and statistically efficient tool for learning to act with similar bandit tasks.
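One round of hierarchical Thompson sampling in a linear-Gaussian instance could look like the sketch below, assuming the hyper-posterior over the shared parameter is maintained elsewhere; the names and noise model are illustrative, not the paper's exact algorithm.

```python
import numpy as np

def hier_ts_action(actions, mu_hyper, Sigma_hyper, Sigma_0, X, y, noise_sd=1.0, rng=None):
    """Sketch: sample the shared hyper-parameter, use it as the task prior mean,
    form the task posterior from that task's own data, sample, act greedily."""
    rng = rng or np.random.default_rng()
    mu_star = rng.multivariate_normal(mu_hyper, Sigma_hyper)      # hyper-parameter sample
    prior_prec = np.linalg.inv(Sigma_0)
    Sigma_post = np.linalg.inv(prior_prec + X.T @ X / noise_sd**2)
    mu_post = Sigma_post @ (prior_prec @ mu_star + X.T @ y / noise_sd**2)
    theta = rng.multivariate_normal(mu_post, Sigma_post)          # task-parameter sample
    return int(np.argmax(actions @ theta))                        # greedy w.r.t. the sample
```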